8 research outputs found

    Pricing Link by Time

    Get PDF
    The combination of loss-based TCP and drop-tail routers often results in full buffers, creating large queueing delays. The challenge of parameter tuning and the drastic consequences of improper tuning have discouraged network administrators from enabling AQM even when routers support it. To address this problem, we propose a novel design principle for AQM, called the pricing-link-by-time (PLT) principle. PLT increases the link price while the backlog stays above a threshold β, and resets the price once the backlog drops below β. We prove that such a system exhibits cyclic behavior that is robust against changes in the network environment and protocol parameters. While β approximately controls the backlog level, the backlog dynamics are invariant across a wide range of β values. Therefore, β can be chosen to reduce delay without undermining system performance. We validate these analytical results using packet-level simulation.
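    The PLT rule itself is only a few lines. The toy discrete-time simulation below illustrates the cyclic behavior the abstract describes; the source model (arrival rate falling linearly in the price) and every constant are illustrative assumptions, not taken from the paper:

```python
def simulate_plt(beta=100.0, capacity=10.0, steps=500, price_step=0.05, k=5.0):
    """Toy PLT link: ramp the price while the backlog stays above beta,
    reset the price once the backlog drops below beta.

    The source response (arrivals shrinking linearly as price grows) and
    all constants are illustrative assumptions, not from the paper.
    """
    backlog, price, resets, peak = 20.0, 0.0, 0, 0.0
    for _ in range(steps):
        arrivals = max(0.0, capacity + k * (1.0 - price))  # sources back off as price rises
        backlog = max(0.0, backlog + arrivals - capacity)
        peak = max(peak, backlog)
        if backlog > beta:
            price += price_step        # backlog above threshold: keep raising the price
        else:
            if price > 0.0:
                resets += 1            # backlog drained below threshold: price resets
            price = 0.0
    return resets, peak
```

    In this toy model the backlog repeatedly overshoots β, the price ramps until sources back off and the backlog drains, and the price then resets — a sawtooth cycle, consistent with the cyclic behavior proved in the paper.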

    From Tuberculosis Bedside to Bench: UBE2B Splicing as a Potential Biomarker and Its Regulatory Mechanism

    Get PDF
    Alternative splicing (AS) is an important mechanism by which pathogens and hosts remodel the transcriptome. However, tuberculosis (TB)-related AS has not been sufficiently explored. Here we present the first landscape of TB-related AS by long-read sequencing, and screen four AS events (S100A8-intron1-retained intron, RPS20-exon1-alternative promoter, KIF13B-exon4-skipped exon (SE) and UBE2B-exon7-SE) as potential biomarkers in an in-house cohort-1. Validation in an in-house cohort-2 (2,274 samples) and public datasets (1,557 samples) indicated that the latter three AS events are promising biomarkers for TB diagnosis, but not for TB progression or prognosis. The excellent performance of the resulting classifiers further underscored the diagnostic value of these three biomarkers. Subgroup analyses indicated that UBE2B-exon7-SE splicing was not affected by confounding factors and thus performed relatively stably. The splicing of UBE2B exon 7 can be changed by heat-killed Mycobacterium tuberculosis through inhibition of SRSF1 expression. After heat-killed Mycobacterium tuberculosis stimulation, 231 ubiquitination proteins in macrophages were differentially expressed, most of which are apoptosis-related. Taken together, we depict a global TB-associated splicing profile, develop TB-related AS biomarkers, demonstrate the optimal application scope of the target biomarkers, and preliminarily elucidate the Mycobacterium tuberculosis-host interaction from the perspective of splicing, offering novel insight into the pathophysiology of TB.

    Congestion control for transmission control protocol (TCP) in wireless networks

    No full text
    The best MPhil thesis in the Faculties of Dentistry, Engineering, Medicine and Science (University of Hong Kong), Li Ka Shing Prize, 2010-11. Published or final version. Electrical and Electronic Engineering. Master of Philosophy.

    Systematic design of internet congestion control : theory and algorithms

    No full text
    The Internet is dynamically shared by numerous flows of data traffic. Network congestion occurs when the aggregate flow rate persistently exceeds the network capacity, leading to excessive delivery delay and loss of user data. To control network congestion, a flow needs to adapt the sending rate to its inferred level of congestion, and a packet switch needs to report its local level of congestion. In this framework of Internet congestion control, it is important for flows to react promptly against congestion, and robustly against interfering network events resembling congestion. This is challenging due to the highly dynamic interactions of various network components over a global scale. Prior approaches rely predominantly on empirical observations in experiments for constructing and validating designs. However, without a careful, systematic examination of all viable options, more efficient designs may be overlooked. Moreover, experimental results have limited applicability to scenarios beyond the specific experimental settings. In this thesis, I employ a novel, systematic design approach. I formalize the design process of Internet congestion control from a minimal set of empirical observations. I prove the robustness and optimality of the attained design in general settings, and validate these properties in practical experimental settings. First, I develop a systematic method for enhancing the robustness of flows against interfering events resembling congestion. The class of additive-increase-multiplicative-decrease (AIMD) algorithms in Transmission Control Protocol (TCP) is the set of dominant algorithms governing the flow rate adaptation process. Over the present Internet, packet reordering and non-congestive loss occur frequently and are misinterpreted by TCP AIMD as packet loss due to congestion. This leads to underutilization of network resources. 
With a complete, formal characterization of the design space of TCP AIMD, I formulate the design of wireless TCP AIMD as an optimal control problem over this space. The derived optimal algorithm attains a significant performance improvement over existing enhancements in packet-level simulation. Second, I propose a novel design principle, known as pricing-link-by-time (PLT), that specifies how to set the measure of congestion, or "link price", at a router to provide prompt feedback to flows. Existing feedback mechanisms require sophisticated parameter tuning, and suffer drastic performance degradation with improperly tuned parameters. PLT makes parameter tuning a simple, optional process. It increases the link price while the backlog stays above a threshold value, and resets the price once the backlog drops below the threshold. I prove that such a system exhibits cyclic behavior that is robust against changes in the network environment and protocol parameters. Moreover, changing the threshold value can control delay without undermining system performance. I validate these analytical results using packet-level simulation. Finally, the incremental deployment of various enhancements has made Internet congestion control highly heterogeneous. The final part of the thesis studies this issue by analyzing the competition among flows with heterogeneous robustness against interfering network events. While rigorous theories have been a major vehicle for understanding system designs, this thesis involves them directly in the design process. This systematic design approach fully exploits structural characteristics and leads to generally applicable, effective solutions. Published or final version. Electrical and Electronic Engineering. Doctor of Philosophy.
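As context for the AIMD design space discussed above, a generic TCP-style AIMD window update (with the textbook parameters alpha = 1 segment, beta = 1/2 — not the thesis's derived optimal algorithm) can be sketched as:

```python
def aimd_update(cwnd, congestion_loss, alpha=1.0, beta=0.5):
    """One round-trip of additive-increase multiplicative-decrease:
    grow the congestion window by alpha when no loss is observed,
    cut it by the factor beta on a congestion signal (floor of 1 segment)."""
    if congestion_loss:
        return max(1.0, cwnd * beta)   # multiplicative decrease
    return cwnd + alpha                # additive increase
```

The robustness problem the thesis targets is visible here: a reordered or corrupted (non-congestive) packet that is misread as `congestion_loss=True` wrongly triggers the decrease branch and halves the sending rate, which motivates searching the AIMD design space for variants robust to such events.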

    The redistribution of power flow in cascading failures

    No full text
    Understanding the redistribution of power flow is crucial to understanding the dynamics of cascading failures. Such redistribution is complicated, with monotonicity being the exception rather than the norm. We study the monotonicity of a quadratic function of branch power flow with respect to link failure and load shedding, respectively. The quadratic function can be regarded as a measure of the aggregate network loading. We show that the value of this measure increases when (more) link failures occur. On the other hand, while arbitrary load shedding can increase the measure value, we establish the existence of load shedding that guarantees its reduction. Utilizing these monotonicity properties, we show that the failure of a link causes the power flow over an adjacent link to change in the same direction (away from or towards their common incident bus) as the original flow over the failed link.
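    The increase of the quadratic loading measure under link failure can be illustrated with a toy DC power-flow model. The sketch below (a standard DC approximation on a hypothetical three-bus triangle network; all numbers are illustrative and not from the paper) computes the measure, taken here as the sum of squared branch flows, before and after one link fails:

```python
import numpy as np

def dc_flows(lines, b, p):
    """DC power flow: build the susceptance Laplacian, fix bus 0 as slack,
    solve the reduced system B' theta = P, and return the branch flows
    f_ij = b_ij * (theta_i - theta_j)."""
    n = len(p)
    B = np.zeros((n, n))
    for (i, j), bij in zip(lines, b):
        B[i, i] += bij; B[j, j] += bij
        B[i, j] -= bij; B[j, i] -= bij
    theta = np.zeros(n)
    theta[1:] = np.linalg.solve(B[1:, 1:], p[1:])  # slack angle theta_0 = 0
    return np.array([bij * (theta[i] - theta[j]) for (i, j), bij in zip(lines, b)])

def loading(flows):
    """Quadratic aggregate-loading measure: sum of squared branch flows."""
    return float(np.sum(np.asarray(flows) ** 2))

# Triangle network: bus 0 injects 1 unit, bus 2 withdraws 1 unit, unit susceptances.
lines = [(0, 1), (1, 2), (0, 2)]
p = np.array([1.0, 0.0, -1.0])
before = loading(dc_flows(lines, [1.0] * 3, p))      # flows 1/3, 1/3, 2/3 -> 2/3
after = loading(dc_flows(lines[:2], [1.0] * 2, p))   # link (0, 2) fails -> flows 1, 1 -> 2
```

    In this example the failure of link (0, 2) raises the measure from 2/3 to 2, and the flow on the adjacent link (0, 1) grows from 1/3 to 1 in the same direction as the lost flow, consistent with both monotonicity results stated above.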

    Pricing link by time

    No full text